Legal experts discuss artificial intelligence regulation and how financial services firms should respond to ChatGPT, DALL-E and other Gen AI services

Generative AI, colloquially known as Gen AI, is a category of artificial intelligence that uses sophisticated learning algorithms to rapidly generate a range of content, including images, video, audio, text, and code. 

It sits centrally within the wider AI sphere, straddling machine learning, natural language processing (NLP), and deep learning to generate useful content. With competitive pressures rising, and the advantages of AI optimisation becoming clearer, companies around the world are now exploring Gen AI in greater detail and considering how it can benefit their business model.

The most recognisable of the Gen AI tools is ChatGPT. GPT stands for ‘Generative Pre-trained Transformer’, a model that can turn a large collection of unlabelled data, usually scraped from the internet, into articles and stories that closely resemble human-written text. Whilst ChatGPT is in the headlines, it’s not the only tool. For a start, there’s DALL-E, which creates realistic images and art from a description in natural language, and Codex – a tool that generates code from prompts.

The regulatory glare intensifies

With these new opportunities come new risks. The Financial Times reported that Gen AI could help raise global GDP by 7% over a 10-year period, but in doing so could expose 300 million jobs across major markets to automation. It needs to be handled with care, both in terms of its impact on people’s livelihoods and its expansive legal footprint.

Elon Musk, CEO of Tesla and owner of Twitter, recently reaffirmed his view that there should be a “pause” on the development of AI whilst it awaits regulation. This is a view shared by many, which hasn’t escaped the attention of the world’s legislators and policymakers, who are starting to ramp things up.

The use of AI in the European Union (EU) will soon be regulated by the AI Act, the world’s first comprehensive AI law, which is anticipated to become a benchmark for other jurisdictions. In June 2023, the European Parliament passed its version of the Act, setting the stage for a final debate on the bill between the European Commission, Council, and Parliament – called the “trilogue”. Amongst other provisions, the new law will promote regulatory sandboxes established by public authorities to test AI before its deployment.

The UK is following suit. In March 2023, the UK government published a policy paper detailing its plans for implementing a pro-innovation approach to AI regulation. And whilst the United States currently has no comprehensive federal legislation on AI, the Biden administration’s Blueprint for an AI Bill of Rights is likely a sign of further government action to come.

Getting ready for the new world

Whilst awaiting regulatory clarity, it’s understandable that businesses from all sectors are keen to push ahead with harnessing the power of Gen AI to bolster their proposition. This new generation of AI promises a variety of potential benefits for organisations, ranging from improved efficiency and cost reductions, to enhanced customer experience and security. 

But how can forward-looking companies be sure they’re approaching Gen AI with due care? To help demystify Gen AI and its legal implications, we gathered a panel of experts from across our international business for a webinar that provides usable insights into this fast-moving area. If you’re interested in the topic, I’d urge you to watch the full recording to help boost your understanding. Here are some of the key takeaways you may find useful:  

    • Consider the right use cases for your business. Whether it’s product design, content creation, or fraud detection, the opportunities that this technology opens up are enormous. For example, it’s likely call centres will increasingly implement Gen AI to summarise incoming customer calls. Gen AI can also advance document intelligence by extracting complex information to create reports and summaries that previously had to be produced manually. Another application is drawing on Gen AI to prepare a rough first draft of marketing materials. Find the most relevant use case for your specific needs.
    • Create an appropriate internal policy. In a recent survey of 11,700 professionals, 43% said they already use Gen AI for work, with around 70% of this group claiming they’re using ChatGPT and other tools without disclosing it to their bosses. So once you’ve determined the right use cases for your business, drafting a comprehensive policy which states how Gen AI can be used, and in what circumstances, is incredibly helpful in setting expectations, managing behaviours, and helping your people to move forward with confidence.
    • Understand what data sources sit behind the technology. For example, ChatGPT version 3.5 was only trained on data up to 2021, and so didn’t access more recent information. And many Gen AI solutions don’t provide the user with sources, making it much harder to fact check. Data bias is also an important consideration – information predominantly comes from the West, which can open up political or unconscious bias. Gen AI can be very convincing – a bit like an overconfident lawyer who is technically wrong! That can be very dangerous, so users should have a thorough understanding of exactly what they’re asking. 
    • Be aware of the full range of limitations and legal risks. The use of Gen AI tools will be increasingly shaped by evolving regulations, and impacted by considerations regarding intellectual property (IP) laws, data and privacy, cybersecurity, contracts and suppliers, litigation risk, and employment law. If you operate within a regulated sector, you’ll need to check regularly for any guidance provided by your regulator. For example, if you don’t comply with the EU AI Act, the fines are set at 6% of annual global turnover or 30 million euros, whichever is higher. We’re also starting to see more cases regarding IP infringement, such as the recent case where Getty Images accused Stability AI of breaching its copyright by using its images to “train” its Stable Diffusion system. Stay informed to stay clear of trouble.
    • Moving forward, consider making AI part of your ESG strategy. ESG is high on the board agenda for almost all businesses. The ethical use of Gen AI, including its reputational risks, will likely come to sit within at least the ‘G’, if not the ‘S’ as well. Are the Gen AI systems you’re using fair and non-discriminatory? Have you tested the outputs for discrimination and disparate impacts toward protected classes? Unsupervised AI systems can generate natural language datasets that are harmful, and so require careful monitoring and a thorough understanding.
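The penalty provision mentioned above is a simple “higher of” calculation. As a rough illustration only – using the 6% / 30 million euro figures cited in this article, which relate to a draft of the Act and may change in the final text – the logic can be sketched as:

```python
def eu_ai_act_max_fine(annual_global_turnover_eur: float) -> float:
    """Illustrative maximum fine under the draft EU AI Act:
    the higher of 6% of annual global turnover or EUR 30 million.
    (Hypothetical helper; thresholds taken from the draft figures cited here.)"""
    return max(0.06 * annual_global_turnover_eur, 30_000_000)

# A firm with EUR 1bn turnover: 6% = EUR 60m, which exceeds the EUR 30m floor
print(eu_ai_act_max_fine(1_000_000_000))  # 60000000.0

# A smaller firm with EUR 100m turnover: 6% = only EUR 6m, so the EUR 30m floor applies
print(eu_ai_act_max_fine(100_000_000))  # 30000000.0
```

The point for boards is that the floor means exposure does not shrink proportionally for smaller firms – the 30 million euro minimum can dwarf 6% of a modest turnover.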

Embracing Gen AI can result in more innovative business models, but there are risks along the way. Identify the right use cases, determine their impact, and develop a plan to tackle them. Your legal advisors will be able to help you navigate the complex regulatory and ethical landscape to ensure that you are integrating this disruptive technology responsibly.
